ABAQUS INP COMPREHENSIVE ANALYZER
Under the Hood — A Deep-Dive Series
PART 4
INP Export, Sub-Assembly Extraction,
and the Full Output Pipeline
From Analysis to Action — Writing Valid Abaqus Files
Joseph P. McFadden Sr.
McFaddenCAE.com | The Holistic Analyst
© 2026 Joseph P. McFadden Sr. All rights reserved.
Setup — Why Export Matters
The first three parts of this series covered the inward journey: reading a file, turning it into structured data, extracting geometry, displaying properties, running diagnostics. All of that is analysis. You are consuming information about a model that already exists.
Part Four is about the outward journey: producing new files from what the program has learned. This is where analysis becomes action.
The export system has several distinct output types: INP sub-assembly files, STL geometry files, STEP files for CAD tools, CSV spreadsheets, and DOT graph files. Each serves a different downstream purpose. Each has its own requirements for what a valid output looks like.
We will go through all of them, but the deepest attention will go to the INP export — because writing a valid Abaqus input file is the most structurally demanding task in the output pipeline. It requires the program to reconstruct a document format that a solver will actually execute, and that means getting the structure exactly right.
Section 1 — What a Valid Abaqus INP File Must Contain
Before we can talk about how the program writes an INP file, we need to establish what a valid INP file must contain. The requirements are not arbitrary — they reflect the Abaqus solver's expectations for model input.
At minimum, a self-contained INP file that an Abaqus job can run needs these structural blocks in this order.
First, a heading block. The *Heading keyword followed by comment lines. This is metadata — the solver ignores it, but it is where you put the model name, date, author, and any notes. Well-formatted models always have this.
Second, part definitions. Each part is wrapped in a *Part and *End Part block. Inside the part: a *Node block with all the node coordinates, one or more *Element blocks grouped by element type, and a section definition — *Solid Section or *Shell Section — that assigns a material to the element set.
Third, an assembly block. The *Assembly keyword opens it, *End Assembly closes it. Inside the assembly, each part is instantiated with an *Instance and *End Instance block. The instance gives the part a position in the global coordinate system. It also allows the same part geometry to be reused at multiple positions — but in a sub-assembly export, each part typically has one instance.
Fourth, material definitions. These go after the assembly block in Abaqus CAE-format files. Each material begins with *Material and a NAME parameter, followed by property keyword blocks — *Elastic, *Density, and so on — each followed by their data lines.
Fifth, at least one step definition — *Step through *End Step — with a procedure keyword inside it, output requests, and any boundary conditions or loads. A model without a step definition has no analysis to run, though it is valid as a model definition.
The word 'order' is critical here. Abaqus reads the INP file sequentially. Parts must be defined before they are instantiated. Materials must be defined before they are referenced by sections. The assembly must close before material definitions begin. Steps come last.
A program that writes these blocks in the wrong order produces a file that either errors on input or produces silent garbage. This is exactly why the V15.0 version of the program — documented in the version history — included a dedicated fix for proper keyword casing and correct section definition placement. Earlier versions had produced files where material definitions appeared before the assembly block, which is not how Abaqus CAE exports structure the file.
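The ordering rule can be made concrete with a short sketch in Python, the program's own language. The keyword skeleton and the checker below are illustrative, not code from the analyzer; the part and material names are invented.

```python
# Illustrative skeleton of the CAE-format block order. Names like
# "Part-1" and "Steel" are placeholders, not the program's output.
SKELETON = [
    "*Heading",
    "*Part, NAME=Part-1",
    "*End Part",
    "*Assembly, NAME=Assembly",
    "*Instance, NAME=Part-1-1, PART=Part-1",
    "*End Instance",
    "*End Assembly",
    "*Material, NAME=Steel",
    "*Step, NAME=Static",
    "*End Step",
]

def first_index(lines, keyword):
    """Index of the first line starting with keyword (case-insensitive)."""
    kw = keyword.lower()
    for i, line in enumerate(lines):
        if line.lower().startswith(kw):
            return i
    return -1

def block_order_ok(lines):
    """Parts before the assembly, assembly closed before materials,
    steps last: the sequential reading order Abaqus expects."""
    order = ["*heading", "*part,", "*end part", "*assembly",
             "*end assembly", "*material", "*step,"]
    positions = [first_index(lines, kw) for kw in order]
    return all(p >= 0 for p in positions) and positions == sorted(positions)
```

Run against the skeleton, the check passes; move the material block before the assembly and it fails, which is exactly the class of file the V15.0 fix eliminated.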
Section 2 — The Heading Block and Encoding
The first thing the INP exporter writes is the heading block. Let us look at what it contains and why each line is there.
The block starts with the literal text *Heading. The case matters — Abaqus keywords in CAE-format files are mixed case, not all uppercase. The V15.0 fix specifically addressed this: earlier versions wrote keywords in all caps, which, while technically valid, was inconsistent with what Abaqus CAE produces and could confuse tools that pattern-match against expected output formats.
The comment lines that follow use the double-asterisk comment prefix. They record: the export description, the number of parts included, the version of the analyzer that produced the file, the author name and contact email, the date of export formatted as a full month-day-year string using Python's datetime module, and the list of part names included in the export.
The *Preprint keyword with echo=NO, model=NO, history=NO, and contact=NO is written next. This suppresses the large diagnostic printout that Abaqus would otherwise write to the data file. In production runs you almost always want this suppressed — the default Abaqus verbosity produces files that can be many megabytes larger than needed.
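A minimal sketch of a heading builder, assuming hypothetical comment wording and a placeholder version string; only the *Heading and *Preprint lines are fixed points from the description above.

```python
from datetime import date

def build_heading(part_names, version="V15.x"):
    """Sketch of the heading block. Comment wording and field order
    are illustrative assumptions, not the program's exact output."""
    lines = ["*Heading"]
    lines.append(f"** Sub-assembly export ({len(part_names)} parts)")
    lines.append(f"** Generated by analyzer {version}")
    lines.append("** Exported: " + date.today().strftime("%B %d, %Y"))
    lines.append("** Parts: " + ", ".join(part_names))
    # Suppress the solver's verbose echo of the model input.
    lines.append("*Preprint, echo=NO, model=NO, history=NO, contact=NO")
    return lines

heading = build_heading(["Housing", "PCB"])
```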
Encoding: The UTF-8 Fix
Every file the program writes — INP exports, STL files, CSV exports, DOT graphs — is opened with the explicit encoding parameter set to UTF-8. This was a bug fix that appeared in the version history as resolution of Windows charmap encoding errors.
The issue is a Python behavior difference between operating systems. On Windows, when you open a file for writing without specifying an encoding, Python uses the system's default encoding — which on English Windows is typically CP-1252, often called Windows-1252. On Linux and macOS, the default is UTF-8.
If a material name, part name, or any other string in the model contains a character outside the basic ASCII range — an accented letter, a special symbol, a Unicode character used by some CAD tools in their generated names — writing it under CP-1252 either silently mangles it or raises a charmap error that crashes the export.
The fix is always explicit: open every output file with encoding=UTF-8. This produces consistent behavior on all platforms and handles the full Unicode character space. It is a two-word fix with no downsides, and the cost of not having it is a class of intermittent crashes that are hard to reproduce without the exact model that triggered them.
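The fix in miniature. The file path and part name below are invented for illustration:

```python
import os
import tempfile

# A name with a character outside basic ASCII -- the kind a CP-1252
# default on Windows either mangles or rejects with a "charmap"
# UnicodeEncodeError.
part_name = "Gehäuse"

path = os.path.join(tempfile.mkdtemp(), "names.inp")
with open(path, "w", encoding="utf-8") as f:   # explicit: same on every OS
    f.write(f"** Part: {part_name}\n")

with open(path, encoding="utf-8") as f:        # read it back the same way
    round_trip = f.read()
```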
Section 3 — Writing Parts: Nodes, Elements, and Section Definitions
After the heading, the exporter writes the part blocks. This is the most data-intensive section of the output — it contains every node coordinate and every element connectivity record for every part being exported.
Node Writing
For each part, the exporter calls the part-aware node lookup method described in Part Two — the multi-strategy resolver that handles both integer-keyed orphan mesh models and tuple-keyed structured models. The result is a flat integer-keyed dictionary of node coordinates, containing only the nodes belonging to this specific part.
The nodes are written in sorted order by node ID. Sorting is not strictly required by Abaqus — the solver can handle nodes in any order — but it produces deterministic, human-readable output. If you export the same model twice, you get the same file. That matters for version control and for debugging.
Each node line has the format: node ID, comma, X coordinate, comma, Y coordinate, comma, Z coordinate. For two-dimensional nodes — models that have only X and Y — the third coordinate is omitted. The coordinates are written as floating-point numbers in their full precision as Python's string conversion of a float produces them. No rounding, no reformatting.
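A sketch of the node writer as just described, with a made-up two-node dictionary; the real method's signature may differ.

```python
def format_nodes(nodes):
    """Sketch of node writing: IDs sorted for deterministic output,
    coordinates via str() with no rounding, and 2-D nodes simply carry
    one fewer value. 'nodes' maps integer node ID -> coordinate tuple."""
    lines = ["*Node"]
    for nid in sorted(nodes):
        lines.append(f"{nid}, " + ", ".join(str(c) for c in nodes[nid]))
    return lines
```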
Element Writing — Grouped by Type
Elements are written grouped by element type. The exporter calls the part-aware element retrieval method, collects all elements for the part, then groups them by their type string. Each group gets its own *Element keyword line with the TYPE parameter set to the element type string and an ELSET parameter naming the element set.
Within each group, elements are sorted by element ID — again for determinism. Each element line is: element ID, comma, then the node IDs in their connectivity order joined by commas.
Here is a subtlety. For elements with many nodes — C3D20R with twenty nodes, for example — the connectivity list on a single line can be quite long. Abaqus has a line length limit. The exporter writes these as single lines regardless, relying on Abaqus to handle continuation. This works for most modern Abaqus versions, but it is worth noting as a difference from the canonical CAE format, which wraps long connectivity lines. The V15.7 fix addressed the inverse problem on the reading side — handling files with wrapped lines. The writing side is a known simplification.
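The grouping logic can be sketched as follows. The dictionary layout for elements is an assumption of this sketch, not the program's actual structure:

```python
from collections import defaultdict

def format_elements(elements, elset):
    """Sketch of element writing: one *Element block per type, IDs
    sorted within each block. 'elements' maps element ID ->
    (type string, connectivity list). Long connectivity lines are
    written unwrapped, as noted above."""
    by_type = defaultdict(dict)
    for eid, (etype, conn) in elements.items():
        by_type[etype][eid] = conn
    lines = []
    for etype in sorted(by_type):
        lines.append(f"*Element, TYPE={etype}, ELSET={elset}")
        for eid in sorted(by_type[etype]):
            conn = ", ".join(str(n) for n in by_type[etype][eid])
            lines.append(f"{eid}, {conn}")
    return lines
```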
Section Definition — Solid versus Shell
After the elements, the section definition is written. The exporter checks the element types present in the part to determine whether this is a solid or a shell part.
The detection logic checks the element type strings for shell indicators: any type containing S3 or S4 — the shell element family prefixes — or M3D — the membrane element prefix — is classified as a shell part. Everything else is classified as solid.
For solid parts: *Solid Section with ELSET and MATERIAL parameters. No additional data lines.
For shell parts: *Shell Section with ELSET and MATERIAL parameters, followed by a data line containing the thickness value. The thickness comes from the part data dictionary — it was captured during the second parse pass when the original shell section's first data line was read. If no thickness was captured, a default of 1.0 is written with no annotation — a known weakness that the analyst must verify.
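A sketch of the detection and section-writing logic as described. The function names and the exact data-line formatting are assumptions of this sketch:

```python
def is_shell_part(element_types):
    """The detection rule as described: any type string containing the
    shell family markers S3 or S4, or the membrane marker M3D,
    classifies the part as shell."""
    return any("S3" in t or "S4" in t or "M3D" in t for t in element_types)

def section_lines(element_types, elset, material, thickness=None):
    """Sketch of the section writer; the bare 1.0 fallback mirrors the
    known weakness noted above."""
    if is_shell_part(element_types):
        return [f"*Shell Section, ELSET={elset}, MATERIAL={material}",
                str(thickness if thickness is not None else 1.0)]
    return [f"*Solid Section, ELSET={elset}, MATERIAL={material}"]
```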
The section definition is followed by *End Part and a comment separator. Then the next part begins.
Section 4 — The Assembly Block and Instance Declarations
After all part blocks are written, the assembly block opens. This is the global coordinate frame. It is where the solver is told that the parts it just read are being placed into a simulation together.
The assembly block starts with *Assembly, NAME=Assembly. In the exported file, the assembly is always named Assembly — singular, capitalized. This is the standard convention and what Abaqus CAE produces.
Inside the assembly, each part gets one instance declaration. The format is: *Instance, NAME=export-name-1, PART=export-name. Then *End Instance.
The naming convention is worth explaining. The instance name is the export name followed by a hyphen and the digit 1. Abaqus distinguishes between part names and instance names — a part can be instantiated multiple times, each with a different name and position. By convention, the first instance of a part named Housing is named Housing-1. The program follows this convention for all exported instances.
No translation or rotation transformations are written for the instances. The parts are positioned at the origin in their own coordinate system, and the instance places them at the same origin in the assembly. This means the exported assembly reflects the parts' positions as they were in the original model — which is correct when you are extracting a subset of an assembly that was already spatially positioned.
If the original model had instance transformations — *Instance blocks with translation and rotation data — those transformations are not propagated to the export. This is a known limitation. The export preserves geometry and material data but not global positioning. For sub-assembly extractions used for independent analysis, this is usually acceptable: the analyst defines new boundary conditions and loads relative to the sub-assembly anyway.
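A sketch of the assembly writer, assuming hypothetical part names:

```python
def build_assembly(export_names):
    """Sketch of the assembly block: one untransformed instance per
    part, named '<export name>-1' in the CAE convention."""
    lines = ["*Assembly, NAME=Assembly"]
    for name in export_names:
        lines.append(f"*Instance, NAME={name}-1, PART={name}")
        lines.append("*End Instance")   # no translation/rotation lines
    lines.append("*End Assembly")
    return lines
```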
Interface Node Sets Inside the Assembly
When the include interface node sets option is checked in the export dialog, the assembly block also contains node set definitions for each part's interface nodes.
Interface nodes — those shared between two or more parts, identified during the find_interface_nodes pass described in Part One — represent the physical connection boundaries between parts. Knowing which nodes sit at those boundaries is essential for applying boundary conditions in a sub-assembly analysis: you typically fix the interface nodes to represent the constraint imposed by the rest of the assembly.
The node set is named using the export name followed by IFACENODES. It references the specific instance using the instance name. The node IDs are written in rows of sixteen per line — a formatting convention that keeps lines readable and matches Abaqus CAE output style.
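The sixteen-per-line convention, sketched with an assumed keyword-line format; the actual set name and parameters may differ:

```python
def nset_lines(nset_name, instance_name, node_ids, per_line=16):
    """Sketch of the interface node set writer: sixteen IDs per line,
    matching the CAE output style. The keyword-line parameters here
    are an assumption of this sketch."""
    ids = sorted(node_ids)
    lines = [f"*Nset, NSET={nset_name}, INSTANCE={instance_name}"]
    for i in range(0, len(ids), per_line):
        lines.append(", ".join(str(n) for n in ids[i:i + per_line]))
    return lines
```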
The interface node set does not prescribe any specific boundary condition. It is simply a named collection of nodes that the analyst can reference when defining constraints in the analysis step. The responsibility for choosing the right boundary condition — fully fixed, pinned, spring-supported — belongs to the analyst. The tool provides the set; you decide what to do with it.
Section 5 — Material Definitions: Reconstructing Property Blocks
After the assembly block closes, the material definitions are written. This sequence — parts first, assembly second, materials third — is the correct CAE-format ordering, and it differs from some legacy INP formats where materials appear before the assembly.
The exporter collects the set of unique material names used by all exported parts. It then retrieves the property data for each material from the mat_props dictionary — the same dictionary built during the second parse pass.
For each material, it writes: *Material, NAME=material name. Then for each property record in that material's list: the property keyword with its parameters, followed by each data row as a comma-joined line of values.
The property keywords are written exactly as they were parsed — ELASTIC, DENSITY, PLASTIC, EXPANSION, and so on. If the original keyword had parameters — for example, MODULI=LONG TERM on a viscoelastic definition — those parameters are written back. This round-trip fidelity is important: the exported material should behave identically to the original in a solver run.
Data rows are written as comma-separated floating-point values. The values come directly from the parsed data lists — the same floating-point numbers that the Property Viewer tab displays. No rounding, no reformatting. What was in the original file is what goes into the exported file.
One practical consequence of this fidelity: if the original file had numeric precision that was truncated by the file's author — say, a density written as 7.8e-9 rather than 7.85e-9 — that truncation is preserved in the export. The program does not restore precision that was not in the source file. It cannot. The parser only has access to what was written.
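A sketch of the replay logic. The (keyword line, data rows) record layout is an assumption of this sketch, not necessarily the shape of the program's mat_props dictionary:

```python
def build_materials(mat_props, used_names):
    """Sketch of material writing: parsed keyword lines and data rows
    are replayed verbatim via str(), preserving source precision.
    'mat_props' maps material name -> list of (keyword line, data rows)."""
    lines = []
    for name in sorted(used_names):
        lines.append(f"*Material, NAME={name}")
        for keyword_line, rows in mat_props.get(name, []):
            lines.append(keyword_line)
            for row in rows:
                lines.append(", ".join(str(v) for v in row))
    return lines
```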
Section 6 — The Analysis Step: Placeholders and Presets
Every valid Abaqus INP file that an analyst will actually run needs an analysis step. A sub-assembly export from a larger model has no analysis step by default — the original model's steps reference the full assembly with its original boundary conditions and loads, none of which transfer to the sub-assembly.
The export dialog handles this with two paths.
The first path — the default — writes a minimal placeholder. Three comment lines explain that no step was defined and invite the analyst to add their own *Step definition. A lone *End Step closes the block. This produces a syntactically valid file that will not run without modification, but will import into Abaqus CAE without errors.
The second path — activated by checking the modal step template checkbox — writes a complete Frequency extraction step. This is a one-click way to set up a sub-assembly for natural frequency analysis.
The Modal Step Template
The modal step is written as: *Step, NAME=Frequency, PERTURBATION. The perturbation flag marks this as a linear perturbation step, which is required for eigenvalue extraction. Then *Frequency, EIGENSOLVER=Lanczos, ACOUSTIC COUPLING=on, NORMALIZATION=displacement.
Lanczos is the standard eigensolver for large models — it is iterative and memory-efficient. The displacement normalization option normalizes mode shapes so that the maximum displacement component equals one. This is the most common normalization convention in structural dynamics work.
The number of modes to extract is taken from the modal modes entry in the dialog — defaulting to twenty but adjustable. A mode count suggestion is also displayed in the dialog based on the element count of the selected parts. The rule of thumb encoded is: below 20,000 elements, extract 40 modes; below 100,000, extract 30; below 300,000, extract 20; above that, 15. This is not a physical rule — it is an engineering heuristic based on practical experience with what mode counts tend to be sufficient for sub-assembly validation.
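The heuristic, transcribed directly:

```python
def suggested_mode_count(element_count):
    """The mode-count rule of thumb as stated in the text: an
    engineering heuristic, not a physical rule."""
    if element_count < 20_000:
        return 40
    if element_count < 100_000:
        return 30
    if element_count < 300_000:
        return 20
    return 15
```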
The output requests in the modal step template are minimal: *Output, FIELD; *Node Output requesting displacement U; *Element Output requesting stress S and strain E. These are the fields you need to visualize mode shapes in a post-processor. History output is left empty — in a modal analysis you rarely need history data.
Section 7 — Submodel Options: Cut Sets and Surfaces
The export dialog includes a submodel preset that activates a set of additional outputs useful when the exported sub-assembly will be used as a submodel — a focused high-fidelity analysis driven by results from the global model.
Submodeling in Abaqus is a two-stage process. In stage one, a global model runs — typically a coarser mesh of the full assembly. The solver saves displacement results at the boundaries of the submodel region. In stage two, the submodel runs with its boundaries driven by those interpolated displacement results from stage one. This lets you get high-fidelity results in a critical region without running the entire assembly at high mesh density.
For this workflow to work, the submodel needs specific infrastructure: node sets that identify which nodes sit at the driven boundary — the cut — and, optionally, element-based surfaces that define the cut geometry for contact or pressure applications.
Cut Node Sets
When the write cut sets option is active, the exporter adds node set definitions for each part's interface nodes under the name SUBMODEL_CUT_NODES_export-name. These are the nodes at the boundary between the extracted sub-assembly and the rest of the original model — exactly the nodes that will receive the driven boundary conditions from the global analysis.
A global combined cut set named SUBMODEL_CUT_NODES collects all interface nodes across all exported parts. This is what you reference in the submodel driving keyword in Abaqus — it tells the solver which nodes to interpolate displacements from the global ODB file onto.
Surface Definitions
The write surfaces option adds comment-block placeholders for element-based surface definitions — EXT surfaces covering the exterior faces of each part, and CUT surfaces covering the cut faces at the boundary.
Element-based surface definitions in Abaqus require specifying which face of which element set is part of the surface — for example, the S1 face of ELSET-NAME. This information cannot be derived from node sets alone; it requires knowing which element faces are on the boundary, which is exactly what the face counting algorithm from Part Two produces.
In the current implementation, the surface definitions are written as annotated comment placeholders rather than active definitions. This is an acknowledged limitation: generating fully specified element-based surfaces from the face extraction data is on the roadmap, but the comment placeholders document the intent and provide the analyst with the structure to complete manually.
Section 8 — Export Naming: ELSET, Part Name, and Generic Modes
The export dialog offers three naming modes for the exported parts: ELSET-based, part-name-based, and generic sequential numbering. Understanding why all three exist requires understanding the naming landscape of real Abaqus models.
In a structured model — one with explicit *Part blocks — part names are clear: the program read them from the file. Housing is Housing. PCB is PCB. These names carry semantic meaning.
In an orphan mesh — where parts were reverse-engineered by material-section grouping — the program assigned generic names: Part-1, Part-2, Part-3. These names are placeholders. They describe position in a list, not physical identity.
In a vendor-supplied model — an orphan mesh from a third-party tool — the element set names embedded in the file may be the most stable and meaningful identifiers available. An element set named PCB_SOLID or HOUSING_SHELL tells you more than Part-3.
ELSET Mode — The Stable Default
ELSET mode is the default because it uses the most stable available name. The suggest_name function for ELSET mode looks at the part data dictionary and retrieves the first entry in the elsets list — the element sets associated with this part. If there is an ELSET name available, it becomes the export name.
Stability matters for workflow reasons. If you export a sub-assembly today, review it, and export it again next week from the same source file, you want the exported part names to be the same. Names derived from ELSET definitions in the file are stable across exports as long as the source file has not changed. Names derived from the analysis-identified part order — Part-1, Part-2 — could shift if the ordering changes between versions.
Part Name Mode
Part name mode uses the internal part name as identified by the program — either the name from the *Part block in a structured model, or the reverse-engineered name from orphan mesh analysis. This is the name you see in the Parts tab list.
Generic Mode
Generic mode produces Part_001, Part_002, and so on — zero-padded sequential numbers. This is useful when you are producing a clean export for a recipient who does not need or want model-derived names. Consistent formatting across parts of different origins.
Uniqueness Enforcement and Custom Overrides
Whatever naming mode is chosen, the program enforces uniqueness. If two parts would receive the same suggested name — which can happen when multiple orphan mesh parts share an element set name, for example — the duplicate names are de-duplicated by appending an underscore and a counter. Part-1, Part-1 becomes Part-1, Part-1_2.
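The de-duplication rule, sketched; this version ignores the rare collision with a name that already ends in a suffix:

```python
def dedupe(names):
    """Uniqueness enforcement as described: a repeated suggestion gets
    an underscore and a counter appended."""
    seen = {}
    out = []
    for name in names:
        seen[name] = seen.get(name, 0) + 1
        out.append(name if seen[name] == 1 else f"{name}_{seen[name]}")
    return out
```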
The custom checkbox per row allows you to override the suggestion for any individual part. Checking it marks that row as custom, and a dialog prompts for a replacement name at export time. Rows not marked custom keep their suggested name. Refreshing the suggestion mode — switching from ELSET to PartName, for example — updates only the non-custom rows, preserving any names you have explicitly set.
Section 9 — Writing the File: Lines, Joins, and the Output Path
The entire INP export is built as a Python list of strings — one string per line of the output file. The header lines, the node lines, the element lines, the keyword lines, the material data lines — all appended to the same list in the order they must appear in the file.
When the list is complete, the file is written in a single operation: the list is joined with newline characters and written to disk, followed by a final newline. This two-step approach — build list, then write — is preferred over writing line by line because the output is assembled completely in memory before the file is touched: a failure while building the content leaves nothing on disk, and the write itself either succeeds as a whole or raises immediately.
Python's string join idiom — '\n'.join(lines) — produces one newline between every pair of adjacent lines. The trailing write of a newline after the join adds the final newline that terminates the last line of the file. Text files that do not end in a newline have an incomplete last line by the POSIX definition, and some parsers — including Abaqus on Linux — can behave unexpectedly at the last line if it is not newline-terminated.
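The pattern in miniature, with an invented two-line file:

```python
import os
import tempfile

lines = ["*Heading", "** two-line demo"]

# Build the whole file as a list, then write once: '\n'.join() places a
# newline between adjacent lines, and the trailing write('\n') terminates
# the final line so Unix tools and solvers see a well-formed text file.
path = os.path.join(tempfile.mkdtemp(), "demo.inp")
with open(path, "w", encoding="utf-8") as f:
    f.write("\n".join(lines))
    f.write("\n")

with open(path, encoding="utf-8") as f:
    content = f.read()
```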
The output file path comes from the file save dialog — a standard operating system file chooser. The default extension is .inp and the default filename is a descriptive string built from the number of parts being exported. The user can navigate anywhere on disk and name the file anything they want. The program does not impose any path restrictions.
The file is opened with the write mode flag and encoding=UTF-8 — the same encoding convention applied throughout all output operations, as discussed in Section 2. There are no intermediate temp files, no staging area, no rename-on-completion. The target file is written directly. If the write fails partway through — disk full, permissions error, path not found — the exception bubbles up, is caught by the outer try-except block in the export function, and an error dialog is shown. No partial file is silently left on disk.
Section 10 — Single-Part Export versus Multi-Part Export
The export system has two entry points: single-part and multi-part. They share the same underlying writer logic but differ in how they are invoked and what dialog they present.
Single-Part Export
Single-part export is triggered from the Parts tab when exactly one part is selected and you click the INP export button. The program identifies the part name using the part index keys list — a parallel list maintained alongside the Parts tab listbox that stores the actual part names without the display formatting.
This parallel index list is an important implementation detail. The Parts tab listbox displays formatted strings: part name, element count, material, volume, mass. Parsing the part name back out of that formatted string would be fragile — any change to the display format would break the parser. The parallel index list avoids this entirely: the display string and the actual name are stored separately, and selection uses the index to look up the name from the clean list rather than from the display text.
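The pattern, sketched with invented part data:

```python
# Sketch of the parallel-list pattern. The listbox rows carry display
# formatting; part_index_keys carries the clean names at the same
# indices. Names, counts, and the row format are illustrative only.
part_index_keys = []
display_rows = []
for name, n_elems in [("Housing", 1240), ("PCB", 860)]:
    part_index_keys.append(name)
    display_rows.append(f"{name}  |  {n_elems} elements")

def selected_names(selected_indices):
    """Resolve listbox selections without parsing display strings."""
    return [part_index_keys[i] for i in selected_indices]
```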
Multi-Part Export
Multi-part export is triggered when multiple parts are selected — using control-click or shift-click in the Parts tab listbox — and you click the export button. All selected indices are converted to part names via the same index lookup, and the list of names is passed to the multi-part export dialog.
The multi-part dialog adds the naming controls, the submodel options, and the modal step template controls described in the previous sections. It also shows summary statistics at the top: total parts, total element count, and total mass. For large exports — assemblies with dozens of parts — this summary tells you immediately whether the selection is what you intended.
The underlying writer function — export_multiple_parts — is the same regardless of whether one part or one hundred parts are being exported. A single-part export simply calls it with a one-element list. This is deliberate: keeping one code path for all INP exports means one set of tests, one set of bugs, and one place to make improvements.
Section 11 — CSV Exports: Material Mapping and Part Properties
The CSV export system produces two types of spreadsheet output: material-section mapping files and part property files. Both are formatted for consumption in Excel, Python, or any tabular data tool.
Material Section CSV
The Export Material CSV button in the Materials tab writes the mapping for a selected material. The columns are: Material, Section Type, Section Raw, ELSET, and Parts. The Section Raw column contains the original keyword line exactly as it appeared in the INP file — the complete text including keyword and all parameters. This preserves full fidelity for downstream tools that need to reconstruct section definitions.
The Parts column is a semicolon-joined list of all part names associated with each section entry. This is because a single section can be associated with multiple parts — when an element set spans multiple part contexts — and a single CSV cell is the most compact representation.
Part Properties CSV
The Export Parts CSV button writes part-level data for all parts using the selected material. The columns are: Material, Part, Elements, Volume (mm³), Mass (kg), and Density. The volume and mass values come from the part properties dictionary populated by the Gaussian quadrature calculation during processing.
The values are formatted with deliberate precision: volume to four decimal places, mass to six decimal places, density in scientific notation with two significant figures. These precision choices reflect the expected useful range: volume differences below the fourth decimal place are below the noise of mesh discretization, mass differences below the sixth decimal place are below the practical accuracy of density data, and density spans many orders of magnitude so scientific notation is the clearest representation.
Both CSV writers open their output files with newline='' — a requirement stated in Python's csv module documentation. Without this argument, the writer's row terminators pass through the text-mode newline translation on Windows and gain an extra carriage return, producing files with a blank row between every data row when opened in Excel. This is a common Python CSV pitfall that the program handles correctly.
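Both points, sketched together. The density format (.1e, two significant figures) and the exact column layout are assumptions based on the description above:

```python
import csv
import os
import tempfile

def part_row(material, part, elements, volume, mass, density):
    # Precision per the text: volume .4f, mass .6f, density two
    # significant figures (sketched here as .1e).
    return [material, part, elements,
            f"{volume:.4f}", f"{mass:.6f}", f"{density:.1e}"]

path = os.path.join(tempfile.mkdtemp(), "parts.csv")
# newline='' hands line-ending control to the csv module; without it,
# Windows output gains an extra '\r' and Excel shows blank rows.
with open(path, "w", newline="", encoding="utf-8") as f:
    w = csv.writer(f)
    w.writerow(["Material", "Part", "Elements", "Volume (mm³)", "Mass (kg)", "Density"])
    w.writerow(part_row("Steel", "Housing", 1240, 152.30456, 0.00118797, 7.8e-9))

with open(path, newline="", encoding="utf-8") as f:
    rows = list(csv.reader(f))
```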
Section 12 — The DOT Graph Export: Relationships as a Network
The DOT graph export produces a file in the DOT language format — the input format for Graphviz, an open-source graph rendering toolkit. The output is a directed graph showing the relationships between materials, sections, and parts.
DOT is a text-based format. The file opens with the keyword digraph G followed by an opening brace. Inside, each node in the graph is declared with a label. Each directed edge is declared with an arrow operator.
For a material named Steel-304 with two section definitions, each referencing different parts, the graph looks like this. A node for Steel-304 labeled as a material. A node for each section definition — sec0, sec1 — with the section raw text as the label. An edge from Steel-304 to sec0, another from Steel-304 to sec1. Then edges from each section node to each part it references.
The result, when rendered by Graphviz, is a visual diagram showing the material at the top, sections branching from it, and parts at the leaves. This makes the material DNA of the model visible as a graph structure — you can immediately see whether a material is used in one place or twenty, whether sections share parts, and whether any parts appear in multiple material branches.
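A sketch of a DOT builder matching that description; the labels and quoting style are simplified relative to whatever the real export writes:

```python
def build_dot(material, sections):
    """Sketch of the DOT export. 'sections' is a list of
    (section raw text, [part names]) tuples for the material."""
    lines = ["digraph G {"]
    lines.append(f'  "{material}" [label="{material} (material)"];')
    for i, (raw, parts) in enumerate(sections):
        sec = f"sec{i}"
        lines.append(f'  "{sec}" [label="{raw}"];')
        lines.append(f'  "{material}" -> "{sec}";')
        for part in parts:
            lines.append(f'  "{sec}" -> "{part}";')
    lines.append("}")
    return "\n".join(lines)
```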
DOT files can be rendered by the Graphviz command-line tool with the command: dot -Tpng filename.dot -o filename.png. They can also be opened by online Graphviz renderers, by the Gephi network visualization tool, and by VS Code extensions. The format is widely supported.
The DOT export is a small function — under thirty lines of code — but it illustrates an important design principle: the material-section-part mapping that was built for internal analysis purposes can be repurposed as a network representation without any change to the underlying data. The data structure was rich enough to support multiple output representations. Building data structures that support multiple views of the same information is a hallmark of well-designed engineering software.
Section 13 — The STEP Export Path
The program includes a STEP export capability — producing ISO 10303 STEP files, the standard CAD exchange format that allows mesh geometry to be imported into SolidWorks, CATIA, NX, and other CAD tools as solid bodies rather than triangulated surfaces.
STEP export is architecturally different from STL export. An STL file is just triangles — it makes no attempt to represent the geometric topology of the original solid. A STEP file represents B-Rep geometry: Boundary Representation, the native data structure of CAD solids. B-Rep stores the faces, edges, vertices, and their topological relationships as analytical surfaces — planes, cylinders, spheres — not as triangle approximations.
Converting a finite element mesh into B-Rep geometry is a hard problem. The mesh is discrete. B-Rep is continuous. The conversion requires fitting analytical surface patches to the triangulated exterior surfaces, detecting sharp edges and blending curvature, and building the topological relationships between faces.
The program delegates this to an external module: step_exporter.py, which in turn uses PythonOCC — the Python binding for OpenCASCADE, an open-source CAD kernel. OpenCASCADE is the same geometric engine used by FreeCAD, Salome, and several other open-source CAD tools. It includes facilities for constructing and exporting B-Rep solids.
The STEP exporter module is optional — if PythonOCC is not installed, the STEP export button in the interface is disabled and a message explains the dependency. This is consistent with the design philosophy of the whole program: zero required dependencies for the core features, optional dependencies for advanced capabilities that the user can install when needed.
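The optional-dependency pattern described here can be sketched as follows. The import path is the real PythonOCC module layout, but the function name and message text are illustrative, not the program's actual code.

```python
# Probe for PythonOCC once at import time; never crash if it is absent.
try:
    from OCC.Core.STEPControl import STEPControl_Writer  # PythonOCC
    HAS_PYTHONOCC = True
except ImportError:
    HAS_PYTHONOCC = False

def step_export_available():
    """Return (enabled, message) for the STEP export button.

    The interface disables the button and shows the message
    when the optional dependency is missing.
    """
    if HAS_PYTHONOCC:
        return True, "STEP export ready."
    return False, ("STEP export requires PythonOCC. "
                   "Install it with: conda install -c conda-forge pythonocc-core")
```

The core program never imports the CAD kernel directly; only the probe does, so a missing dependency degrades one button rather than the whole application.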
The STEP export dialog shows a brief explanation of what the operation does and what to expect. It notes that the conversion is from mesh to CAD surfaces — which is an approximation — and that the quality of the resulting solid depends on the mesh density and regularity. A coarse mesh produces a faceted STEP body. A fine mesh with curved elements produces a smoother approximation.
For engineers who need to pass geometry back to a CAD system for drawing creation, tolerance analysis, or tooling design, even a faceted STEP body is more useful than a triangulated STL. The STEP format preserves the concept of faces and edges that CAD tools can reason about, while STL is opaque to topology.
Section 14 — The Summary Report: Text as Output
There is one more output type worth covering before we close: the model summary report — the text that populates the Summary tab after processing.
The summary is generated by the summarize function — or its unit-aware version, summarize_with_units, when a unit system has been confirmed. Both functions take the info dictionary — the high-level model census collected during the first parse pass — and build a formatted text report.
The report is built as a list of strings, exactly like the INP export, and then joined into a single text block. This text block is inserted into the Summary tab's scrolled text widget. The same text can be copied to the clipboard with a single right-click.
The report sections mirror the info dictionary structure: files and include chain, simulation type, mesh statistics with element type breakdown, materials and sections, assembly instances and transformations, element and node sets, surface definitions, analysis steps with time periods and incrementation, initial conditions, predefined fields, gravity and distributed loads, amplitude definitions, contact pairs and general contact status, mass scaling settings, bulk viscosity parameters, field and history output requests, and hourglass control settings.
When a unit system is confirmed, the unit-aware version adds dimension labels to the numerical values. Mesh dimensions in millimeters or inches depending on the unit system. Time periods in seconds or milliseconds. Material property values with their appropriate units. The same numbers, but now with the context that tells you what they mean physically.
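The list-then-join pattern shared by the summary and the INP writer can be sketched as below. This is a hypothetical, drastically simplified stand-in for summarize and summarize_with_units; the real info dictionary has many more keys, and the unit-system labels shown are illustrative.

```python
def summarize_sketch(info, unit_system=None):
    """Build a summary report as a list of lines, then join once.

    info: a model-census dictionary (only a few keys shown here).
    unit_system: optional label that adds dimension context to numbers.
    """
    length_unit = {"SI-mm": "mm", "US-in": "in"}.get(unit_system, "")
    lines = []
    lines.append(f"Model: {info.get('name', '(unnamed)')}")
    lines.append(f"Nodes: {info.get('node_count', 0)}")
    lines.append(f"Elements: {info.get('element_count', 0)}")
    for etype, count in sorted(info.get("element_types", {}).items()):
        lines.append(f"  {etype}: {count}")
    extent = info.get("bounding_box_extent")
    if extent is not None:
        label = f" {length_unit}" if length_unit else ""
        lines.append(f"Bounding box extent: {extent}{label}")
    return "\n".join(lines)
```

The unit-aware behavior is just the same numbers with a suffix appended, which is why the two summary functions can share almost all of their structure.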
The summary is not exported to a file automatically. It is a display artifact — it exists in the interface for reference. If you want to save it, you copy and paste it. This is an intentional design decision: the summary is a review tool, not a deliverable. Deliverables are the INP exports, the CSVs, the STLs, the STEP files. The summary is for the analyst's eyes during the review session.
Section 15 — The Complete Output Pipeline, End to End
Here is the complete output pipeline, step by step, as a final numbered summary.
1. User selects one or more parts in the Parts tab and clicks an export button — INP, STL, STEP, or the CSV and DOT buttons from the Materials tab.
2. The part index keys list translates the listbox selection indices to actual part names without parsing display strings.
3. For INP: the multi-part export dialog presents naming controls, submodel options, and a modal step template. The name mode dropdown generates suggestions from ELSET names, part names, or generic numbering. Custom name overrides are prompted for individually. Uniqueness is enforced by de-duplication.
4. The INP writer builds a line list. *Heading block with metadata and *Preprint suppression. For each part: *Part, *Node block sorted by ID, *Element blocks grouped and sorted by type, section definition with shell detection, *End Part. *Assembly block with instances and interface node sets. Material definitions from the mat_props dictionary. Analysis step — placeholder or modal template. Submodel cut sets and surface stubs if requested.
5. The line list is joined with newlines and written to disk in a single operation with UTF-8 encoding.
6. For STL: face extraction runs on the part elements, connectivity validation filters malformed elements, exterior faces are identified, triangles are generated with optional quadratic subdivision, the triangle list is written in binary or ASCII format.
7. For STEP: PythonOCC is called with the triangle list to construct a B-Rep solid and export the STEP file.
8. For CSV: material-section or part-property data is formatted with deliberate precision choices and written with correct newline handling for cross-platform compatibility.
9. For DOT: the material-section-part mapping is traversed to build a directed graph and written in DOT format for Graphviz rendering.
10. Every output file is opened with encoding="utf-8" to prevent Windows charmap encoding errors on models with non-ASCII characters in names.
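The write pattern shared by steps 5, 8, and 10 can be sketched as two small helpers. The function names are illustrative; the encoding and newline arguments are the standard-library mechanisms the text describes.

```python
import csv

def write_text_output(path, lines):
    """Join a line list once and write with explicit UTF-8 (steps 5 and 10).

    Explicit encoding avoids Windows charmap errors when part or
    material names contain non-ASCII characters.
    """
    text = "\n".join(lines) + "\n"
    with open(path, "w", encoding="utf-8") as fh:
        fh.write(text)

def write_csv_output(path, rows):
    """Write CSV with newline='' so the csv module controls line
    endings (step 8); without it, Windows produces doubled blank rows."""
    with open(path, "w", encoding="utf-8", newline="") as fh:
        csv.writer(fh).writerows(rows)
```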
That is the complete output pipeline. From selection in the interface to file on disk, with every structural decision traceable to a requirement.
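One last concrete illustration: the uniqueness enforcement mentioned in step 3 can be sketched with a simple suffixing scheme. This is a hypothetical simplified version; the program's actual naming logic may differ.

```python
def dedupe_names(names):
    """Make a list of part names unique by appending numeric suffixes.

    The first occurrence keeps its name; later duplicates become
    Name-1, Name-2, ... skipping suffixes that already exist.
    """
    seen = {}
    out = []
    for name in names:
        if name not in seen:
            seen[name] = 0
            out.append(name)
            continue
        seen[name] += 1
        candidate = f"{name}-{seen[name]}"
        while candidate in seen:
            seen[name] += 1
            candidate = f"{name}-{seen[name]}"
        seen[candidate] = 0
        out.append(candidate)
    return out
```

Stable, unique names matter because the exported sub-assembly files are artifacts that get versioned and compared; two parts silently sharing a name would corrupt that record.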
Closing — The Act of Writing
Reading a file is an act of discovery. Writing a file is an act of commitment.
When you export an INP sub-assembly from this tool, you are committing to a specific structural description of a specific collection of parts, with specific material properties, with specific interface nodes identified. That file is now an artifact. It can be versioned, shared, run, and the results can be compared to results from the full model. The sub-assembly analysis is either consistent with the global model or it is not, and the comparison tells you something real.
Every design decision in the export system — the correct block ordering, the UTF-8 encoding, the sorted node and element output, the non-destructive in-memory approach, the ELSET-based stable naming, the interface node sets for boundary conditions — serves this commitment. The exported file should be correct, reproducible, and complete enough for the analyst to run an independent analysis without additional manual editing.
Part Five of this series will cover the Learning Center — the built-in educational reference system, the topic library covering unit systems, element types, common failure modes, and simulation pitfalls, and how the program uses the same parsed model data to contextualize that reference material to what is actually in the model you are working with.
Source code, documentation, and all companion readers are at McFaddenCAE.com.
End of Part 4 — INP Export, Sub-Assembly Extraction, and the Full Output Pipeline
Next: Part 5 — The Learning Center, the Topic Library, and Model-Contextualized Reference
© 2026 Joseph P. McFadden Sr. All rights reserved. | McFaddenCAE.com